What Do We Mean By “Rationality”?
I mean two things:
1. Epistemic rationality: systematically improving the accuracy of your beliefs.
2. Instrumental rationality: systematically achieving your values.
The first concept is simple enough. When you open your eyes and look at the room around you, you’ll locate your laptop in relation to the table, and you’ll locate a bookcase in relation to the wall. If something goes wrong with your eyes, or your brain, then your mental model might say there’s a bookcase where no bookcase exists, and when you go over to get a book, you’ll be disappointed.
This is what it’s like to have a false belief, a map of the world that doesn’t correspond to the territory. Epistemic rationality is about building accurate maps instead. This correspondence between belief and reality is commonly called “truth,” and I’m happy to call it that.1
Instrumental rationality, on the other hand, is about steering reality—sending the future where you want it to go. It’s the art of choosing actions that lead to outcomes ranked higher in your preferences. I sometimes call this “winning.”
So rationality is about forming true beliefs and making decisions that help you win.
(Where truth doesn’t mean “certainty,” since we can do plenty to increase the probability that our beliefs are accurate even though we’re uncertain; and winning doesn’t mean “winning at others’ expense,” since our values include everything we care about, including other people.)
When people say “X is rational!” it’s usually just a more strident way of saying “I think X is true” or “I think X is good.” So why have an additional word for “rational” as well as “true” and “good”?
An analogous argument can be given against using “true.” There is no need to say “it is true that snow is white” when you could just say “snow is white.” What makes the idea of truth useful is that it allows us to talk about the general features of map-territory correspondence. “True models usually produce better experimental predictions than false models” is a useful generalization, and it’s not one you can make without using a concept like “true” or “accurate.”
Similarly, “Rational agents make decisions that maximize the probabilistic expectation of a coherent utility function” is the kind of thought that depends on a concept of (instrumental) rationality, whereas “It’s rational to eat vegetables” can probably be replaced with “It’s useful to eat vegetables” or “It’s in your interest to eat vegetables.” We need a concept like “rational” in order to note general facts about those ways of thinking that systematically produce truth or value—and the systematic ways in which we fall short of those standards.
As we’ve observed in the previous essays, experimental psychologists sometimes uncover human reasoning that seems very strange. For example, someone rates the probability “Bill plays jazz” as less than the probability “Bill is an accountant who plays jazz.” This seems like an odd judgment, since any particular jazz-playing accountant is obviously a jazz player. But to what higher vantage point do we appeal in saying that the judgment is wrong?
Experimental psychologists use two gold standards: probability theory, and decision theory.
Probability theory is the set of laws underlying rational belief. The mathematics of probability applies equally to “figuring out where your bookcase is” and “estimating how many hairs were on Julius Caesar’s head,” even though our evidence for the claim “Julius Caesar was bald” is likely to be more complicated and indirect than our evidence for the claim “there’s a bookcase in my room.” It’s all the same problem of how to process the evidence and observations to update one’s beliefs. Similarly, decision theory is the set of laws underlying rational action, and is equally applicable regardless of what one’s goals and available options are.
Let “P(such-and-such)” stand for “the probability that such-and-such happens,” and “P(A,B)” for “the probability that both A and B happen.” Since it is a universal law of probability theory that P(A) ≥ P(A,B), the judgment that P(Bill plays jazz) is less than P(Bill plays jazz, Bill is an accountant) is labeled incorrect.
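To spell out the inequality (with invented numbers, purely for illustration): P(A,B) = P(A) × P(B|A), and since P(B|A) can never exceed 1, the conjunction can never be more probable than either conjunct. If, say, P(Bill plays jazz) were 0.2 and P(Bill is an accountant | Bill plays jazz) were 0.1, then P(Bill plays jazz, Bill is an accountant) would be 0.02; no choice of numbers can reverse the ordering.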
To keep it technical, you would say that this probability judgment is non-Bayesian. Beliefs that conform to a coherent probability distribution, and decisions that maximize the probabilistic expectation of a coherent utility function, are called “Bayesian.”
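As a concrete illustration of that definition (a minimal sketch in Python, with an invented toy scenario and made-up numbers rather than anything from the text): “Bayesian” belief means updating a probability by Bayes’ rule, and “Bayesian” decision means scoring each available action by probability-weighted utility and picking the highest.

```python
# A minimal sketch (all numbers invented): update a belief by Bayes' rule,
# then choose the action with the highest expected utility under that belief.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) from the prior and the two likelihoods."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Belief step: how likely is rain, after seeing dark clouds?
p_rain = bayes_update(prior=0.3, likelihood_if_true=0.8, likelihood_if_false=0.2)

# Decision step: expected utility of each action under that belief.
utilities = {
    "take umbrella":  {"rain": 5,   "no rain": 3},
    "leave umbrella": {"rain": -10, "no rain": 4},
}
expected = {
    action: p_rain * u["rain"] + (1 - p_rain) * u["no rain"]
    for action, u in utilities.items()
}
print(round(p_rain, 2))                 # ~0.63
print(max(expected, key=expected.get))  # "take umbrella"
```

The particular numbers don’t matter; the point is only the shape of the calculation that the word “Bayesian” picks out.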
I should emphasize that this isn’t the notion of rationality that’s common in popular culture. People may use the same string of sounds, “ra-tio-nal,” to refer to “acting like Mr. Spock of Star Trek” and “acting like a Bayesian”; but this doesn’t mean that acting Spock-like helps one hair with epistemic or instrumental rationality.2
All of this does not quite exhaust the problem of what is meant in practice by “rationality,” for two major reasons:
First, the Bayesian formalisms in their full form are computationally intractable on most real-world problems. No one can actually calculate and obey the math, any more than you can predict the stock market by calculating the movements of quarks.
This is why there is a whole site called “Less Wrong,” rather than a single page that simply states the formal axioms and calls it a day. There’s a whole further art to finding the truth and accomplishing value from inside a human mind: we have to learn our own flaws, overcome our biases, prevent ourselves from self-deceiving, get ourselves into good emotional shape to confront the truth and do what needs doing, et cetera, et cetera.
Second, sometimes the meaning of the math itself is called into question. The exact rules of probability theory are called into question by, e.g., anthropic problems in which the number of observers is uncertain. The exact rules of decision theory are called into question by, e.g., Newcomblike problems in which other agents may predict your decision before it happens.3
In cases where our best formalizations still come up short, we can return to simpler ideas like “truth” and “winning.” If you are a scientist just beginning to investigate fire, it might be a lot wiser to point to a campfire and say “Fire is that orangey-bright hot stuff over there,” rather than saying “I define fire as an alchemical transmutation of substances which releases phlogiston.” You certainly shouldn’t ignore something just because you can’t define it. I can’t quote the equations of General Relativity from memory, but nonetheless if I walk off a cliff, I’ll fall. And we can say the same of cognitive biases and other obstacles to truth—they won’t hit any less hard if it turns out we can’t define compactly what “irrationality” is.
In cases like these, it is futile to try to settle the problem by coming up with some new definition of the word “rational” and saying, “Therefore my preferred answer, by definition, is what is meant by the word ‘rational.’ ” This simply raises the question of why anyone should pay attention to your definition. I’m not interested in probability theory because it is the holy word handed down from Laplace. I’m interested in Bayesian-style belief-updating (with Occam priors) because I expect that this style of thinking gets us systematically closer to, you know, accuracy, the map that reflects the territory.
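“Occam priors” can be cashed out, very roughly, as giving simpler hypotheses exponentially more prior weight before updating on the evidence. Here is a toy sketch under that assumption; the hypotheses, “description lengths,” and coin biases below are all invented for illustration.

```python
import math

# Invented hypotheses about a coin, with invented "description lengths" in bits;
# a crude Occam prior gives shorter descriptions exponentially more weight.
complexity_bits = {"fair coin": 2, "biased coin": 6, "coin rigged by a demon": 20}
prior = {h: 2.0 ** -bits for h, bits in complexity_bits.items()}
z = sum(prior.values())
prior = {h: p / z for h, p in prior.items()}

# Assumed chance of heads under each hypothesis, and the likelihood of the
# observed data (8 heads in 10 flips) under each.
bias = {"fair coin": 0.5, "biased coin": 0.7, "coin rigged by a demon": 0.8}

def likelihood(p_heads, heads=8, flips=10):
    return math.comb(flips, heads) * p_heads**heads * (1 - p_heads) ** (flips - heads)

posterior = {h: prior[h] * likelihood(bias[h]) for h in complexity_bits}
z = sum(posterior.values())
posterior = {h: p / z for h, p in posterior.items()}
print(posterior)  # simple hypotheses dominate unless the evidence says otherwise
```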
And then there are questions of how to think that seem not quite answered by either probability theory or decision theory—like the question of how to feel about the truth once you have it. Here, again, trying to define “rationality” a particular way doesn’t support an answer, but merely presumes one.
I am not here to argue the meaning of a word, not even if that word is “rationality.” The point of attaching sequences of letters to particular concepts is to let two people communicate—to help transport thoughts from one mind to another. You cannot change reality, or prove the thought, by manipulating which meanings go with which words.
So if you understand what concept I am generally getting at with this word “rationality,” and with the sub-terms “epistemic rationality” and “instrumental rationality,” we have communicated: we have accomplished everything there is to accomplish by talking about how to define “rationality.” What’s left to discuss is not what meaning to attach to the syllables “ra-tio-na-li-ty”; what’s left to discuss is what is a good way to think.
If you say, “It’s (epistemically) rational for me to believe X, but the truth is Y,” then you are probably using the word “rational” to mean something other than what I have in mind. (E.g., “rationality” should be consistent under reflection—“rationally” looking at the evidence, and “rationally” considering how your mind processes the evidence, shouldn’t lead to two different conclusions.)
Similarly, if you find yourself saying, “The (instrumentally) rational thing for me to do is X, but the right thing for me to do is Y,” then you are almost certainly using some other meaning for the word “rational” or the word “right.” I use the term “rationality” normatively, to pick out desirable patterns of thought.
In this case—or in any other case where people disagree about word meanings—you should substitute more specific language in place of “rational”: “The self-benefiting thing to do is to run away, but I hope I would at least try to drag the child off the railroad tracks,” or “Causal decision theory as usually formulated says you should two-box on Newcomb’s Problem, but I’d rather have a million dollars.”
In fact, I recommend reading back through this essay, replacing every instance of “rational” with “foozal,” and seeing if that changes the connotations of what I’m saying any. If so, I say: strive not for rationality, but for foozality.
The word “rational” has potential pitfalls, but there are plenty of non-borderline cases where “rational” works fine to communicate what I’m getting at. Likewise “irrational.” In these cases I’m not afraid to use it.
Yet one should be careful not to overuse that word. One receives no points merely for pronouncing it loudly. If you speak overmuch of the Way, you will not attain it.
1 For a longer discussion of truth, see “The Simple Truth” at the very end of this volume.
2 The idea that rationality is about strictly privileging verbal reasoning over feelings is a case in point. Bayesian rationality applies to urges, hunches, perceptions, and wordless intuitions, not just to assertions.
I gave the example of opening your eyes, looking around you, and building a mental model of a room containing a bookcase against the wall. The modern idea of rationality is general enough to include your eyes and your brain’s visual areas as things-that-map, and to include instincts and emotions in the belief-and-goal calculus.
3 For an informal statement of Newcomb’s Problem, see Jim Holt, “Thinking Inside the Boxes,” Slate, 2002, http://www.slate.com/articles/arts/egghead/2002/02/thinkinginside_the_boxes.single.html.
Note: this post originally appeared in a context without comments on Overcoming Bias. Old comments on this post are over here.
How should we deal with cases where epistemic rationality contradicts instrumental rationality? For example, we may want to use the placebo effect, because one of our values is that being healthy is better than being sick, and less pain is better than more pain. But the placebo effect depends on believing that the pill is working medicine, which is false. Is there any way to satisfy both epistemic and instrumental rationality?
It varies from case to case, I would think. There are instances where you most probably benefit by trading off epistemic rationality for instrumental, but in cases where things are too chaotic to get a good estimate and the tradeoff seems close to equal, I would personally err on the side of epistemic rationality. Brains are complicated; forcing a placebo effect might have ripple effects across your psyche, like an increased tendency to shut down that voice in your head that speaks up when you know on some level that your belief is wrong (a very speculative example), for limited short-term gain.
Thank you, wonderful series!
It seems to me that this is not a contradiction between the two rationalities. Rather, it is similar to a resonance of doubt. If a placebo works when you believe in it, then believing it works makes the belief true. What you would need is a reverse example, where believing something is true makes it false. (Believing that something is safe won’t work either, since you just need to avoid acting more carelessly on the basis of that belief, which is purely a matter of instrumental rationality.)
If you believe that the placebo works, it works. You’re right in believing it works.
If you don’t believe that the placebo works, it doesn’t work. You’re right in believing it doesn’t work.
If you believe that the sky is blue, you’re right.
If you believe that the sky is green, it’s still blue, you’re wrong.
Truths that involve humans have some amount of reflexivity.
I’d say you shouldn’t force yourself to believe something (epistemic rationality) to achieve a goal (instrumental rationality). This is because, in my view, human minds are addicted to feeling consistent, so it’d be very difficult (i.e., resource expensive) to believe a drug works when you know it doesn’t.
What does it even mean to believe something is true when you know it’s false? I don’t know. Whatever it means, it would have to be a psychological thing rather than an epistemological one. My personal recommendation is to only believe things that are true. This is because the modern environment we live in generally rewards rational behavior based on accurate knowledge anyway, so the problem rarely needs to surface.
The essay reminds me of the book Language in Thought and Action by Samuel Hayakawa. The author also used the map and territory metaphor in that book.
Eliezer has elsewhere mentioned it as having been an influence in his youth. The saying “the map is not the territory” originated with Korzybski, and Hayakawa’s book is a popularisation of his work.
Thank you for the reference. I just stumbled onto this website and found the essays very interesting. As a Chinese reader, there isn’t much content like this on the Chinese web. I feel really lucky to be able to enjoy the ideas while improving my English.
Welcome! There’s a monthly open thread where newcomers are invited to introduce themselves.
The same word can mean many things; words that have converged on the same sound but carry different meanings are spelled differently for a reason. Propaganda manipulates the meaning of things, often through slogans and words. Lies change the meaning of things in order to shape reality. Reality is a perception from a particular perspective, as in the anthropic problems; it is relational, not necessarily objective.
Creating a definition can be done, and is at times useful for making sense of, and verifying the likeness of, the maps and territory contained in other people’s heads, such as to confirm that their maps of language and words are congruent.
If things cannot be defined, the definition is left up to the individual and open to interpretation. The utility of this experiential approach is that it allows individuals to engender their own ideas. When reading around a philosophical work and engaging with the material, you build a representation of its meaning, as you do every time you read or write a word. Even where philosophical works have definitions, there is often further assumed knowledge needed to decode and grasp the work in its entirety. Where there is a formal definition along with examples and implementations of its usage, this adds meaning and information.
Where the probability of controversy is high and the ability to quell controversy is low, the probability of a formal defence of ideas is reduced. There is a ceiling, bounded by time, on how far things can be defended, defined, or explained.
We need not provide and defend formal definitions; a definition is defined through usage. If the probability of a definition causing controversy is high and defining it has low utility, the importance of a formal definition decreases. Leaving things ambiguous, or open to multiple degrees of interpretation, limits reprisals.
If you don’t have anything nice to say, don’t allow it to take shape, to become definitive. This is beside the point that communication can still transmit useful information.
The fact that there is no definition is itself the definition, and is evidence for the definition. You can define things, but in the experiential sense, what can you do with information that is wrong to steel-man it, to give it utility and make it useful?
If the benefit of a definition providing epistemic accuracy is lower than the instrumental utility of not defining, why define it?
Ultimately, if we are to become rational, the worst way to brainstorm is to anchor on a definition of rationality that also causes controversy. As in the stability–instability paradox, not naming something creates more names, not of the thing itself but of the ideas around it. We are still blind men touching the elephant that is rationality.
Wouldn’t it be correct to say that it would be ‘instrumentally rational’ to run away in this case? It sounds rational to me, insofar as ‘winning’ means ‘surviving’.
I think by winning, he meant the “art of choosing actions that lead to outcomes ranked higher in your preferences,” though I don’t completely agree with the word choice of “winning,” which can be ambiguous and cause confusion.
A bit unrelated, but more of a general comment on this: in my view, people generally have unconscious preferences, and knowing and acknowledging these before weighing your preferences is very important, even if some of those preferences are short-term.
Is the last sentence rational?
The one that says “If you speak overmuch of the Way, you will not attain it.”
This is a reference to Taoism (the tao = the Way). I believe it is a different approach to the tenet I’ve heard expressed as “The Tao that can be explained is not the true Tao”. I believe the reference is meant to remind us that the point here is to end up performing less wrong rational thinking, not just talking about it.
great post, just wanted to point out a typo here: “I cant quote the equations of General Relativity from memory, but nonetheless if I walk off a cliff, Ill fall. ”
it should be “I’ll fall”. good work otherwise.
(Fixed, thank you!)
Nice discussion. Thanks for putting this together. I learned something about Epistemic rationality vs Instrumental rationality.
The bit about the sky being blue or green seems to raise the question of a justification for objective truth, as championed by St Augustine and Leibniz, as opposed to arguments for subjective reality, as championed by the Cynics and Skeptics and, more recently, the Frankfurt School. One could make the case that the sky appears green to one person but blue to another.
This topic comes up in many places throughout the history of thought. I’m actually working on a post for my blog exploring that at www.SimplyUrban.Org.